Fast convergence average TimeSynch algorithm for apron sensor network
CHEN Weixing, LIU Qingtao, SUN Xixi, CHEN Bin
Journal of Computer Applications    2020, 40 (11): 3407-3412.   DOI: 10.11772/j.issn.1001-9081.2020030290
The traditional Average TimeSynch (ATS) algorithm for the APron Sensor Network (APSN) converges slowly and works inefficiently because of its distributed iteration. Based on the principle that algebraic connectivity affects the convergence speed of consensus algorithms, a Fast Convergence Average TimeSynch (FCATS) algorithm was proposed. Firstly, virtual links were added between two-hop neighbor nodes in the APSN to increase the network connectivity. Then, the relative clock skew, logical clock skew and offset of each node were updated based on the information of its single-hop and two-hop neighbors. Finally, consensus iteration was performed according to the clock parameter update process. The simulation results show that FCATS converges after the consensus iteration; compared with ATS, its convergence speed is increased by about 50%, and under different topological conditions the convergence speed is increased by more than 20%, a significant improvement.
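As a rough illustration of the idea (not the authors' exact update rules), a consensus-style averaging of clock offsets over one-hop and two-hop neighborhoods can be sketched as follows; the topology, node count and uniform averaging weights are all illustrative:

```python
# Toy sketch of consensus-based clock-offset averaging; the actual FCATS
# update also maintains relative and logical clock skews per node.

def two_hop_neighbors(adj, i):
    """Return the set of one-hop and two-hop neighbors of node i."""
    one_hop = set(adj[i])
    two_hop = {k for j in one_hop for k in adj[j]} - {i}
    return one_hop | two_hop

def consensus_step(offsets, adj, use_two_hop=True):
    """One averaging iteration over each node's (extended) neighborhood."""
    new = []
    for i in range(len(offsets)):
        nbrs = two_hop_neighbors(adj, i) if use_two_hop else set(adj[i])
        nbrs.add(i)
        new.append(sum(offsets[j] for j in nbrs) / len(nbrs))
    return new

# A 4-node line topology: 0-1-2-3, with initial clock offsets
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
offsets = [0.0, 2.0, 4.0, 6.0]
for _ in range(20):
    offsets = consensus_step(offsets, adj)
```

The virtual two-hop links enlarge each averaging neighborhood, which is exactly what raises the algebraic connectivity and speeds up consensus.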
Computation offloading method for workflow management in mobile edge computing
FU Shucun, FU Zhangjie, XING Guowen, LIU Qingxiang, XU Xiaolong
Journal of Computer Applications    2019, 39 (5): 1523-1527.   DOI: 10.11772/j.issn.1001-9081.2018081753
The high energy consumption of mobile devices in mobile edge computing is becoming an increasingly prominent problem. In order to reduce it, an Energy-aware computation Offloading method for Workflows (EOW) was proposed. Technically, the average waiting time of computing tasks at edge devices was analyzed based on queuing theory, and time consumption and energy consumption models for the mobile devices were established. Then a computation offloading method leveraging NSGA-Ⅲ (Non-dominated Sorting Genetic Algorithm Ⅲ) was designed to offload the computing tasks reasonably: some tasks were processed locally by the mobile devices, while the others were offloaded to the edge computing platform or the remote cloud, achieving the goal of saving energy for all the mobile devices. Finally, comparison experiments were conducted on the CloudSim platform. The experimental results show that EOW effectively reduces the energy consumption of the mobile devices while satisfying the deadline of every workflow.
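A minimal sketch of the two modeling ingredients, assuming an M/M/1-style edge queue and a simple linear energy model; both stand in for the paper's actual models, and all function and parameter names are hypothetical:

```python
# Illustrative M/M/1-style sojourn time at an edge node and a toy device
# energy model; the paper's exact queuing analysis, task model and
# NSGA-III encoding are not reproduced here.

def mm1_sojourn_time(arrival_rate, service_rate):
    """Average time a task spends at the node (waiting plus service)."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate >= service rate")
    return 1.0 / (service_rate - arrival_rate)

def device_energy(compute_time, cpu_power, tx_bits, tx_power, bandwidth):
    """Local computation energy plus transmission energy for offloaded data."""
    return cpu_power * compute_time + tx_power * (tx_bits / bandwidth)
```

An offloading decision then trades the device's local energy against transmission energy plus the queue-dependent edge delay, which is the multi-objective search space NSGA-Ⅲ explores.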
Trajectory privacy-preserving method based on information entropy suppression
WANG Yifei, LUO Yonglong, YU Qingying, LIU Qingqing, CHEN Wen
Journal of Computer Applications    2018, 38 (11): 3252-3257.   DOI: 10.11772/j.issn.1001-9081.2018040861
Aiming at the problems of poor data anonymity and large data loss caused by excessive suppression in traditional high-dimensional trajectory privacy protection models, a new trajectory privacy-preserving method based on information entropy suppression was proposed. An entropy-based flowgraph was generated for the trajectory dataset, a reasonable cost function was designed according to the information entropy of spatio-temporal points, and privacy was preserved by local suppression of spatio-temporal points. Meanwhile, an improved algorithm for comparing the similarity of flowgraphs before and after suppression was proposed, and a function for evaluating the privacy gain was introduced. Finally, the proposed method was compared with the LK-Local (Length K-anonymity based on Local suppression) approach in terms of trajectory privacy and data practicability. The experimental results on a synthetic subway transportation system dataset show that, with the same anonymity parameter value, the proposed method increases the similarity measure by about 27%, reduces the data loss by about 25%, and increases the privacy gain by about 21%.
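The entropy of a spatio-temporal point's visit distribution can be computed as below; this is a simplified stand-in for the paper's cost function, and the threshold-based candidate selection is illustrative rather than the exact suppression rule:

```python
import math

# Sketch of an entropy-based suppression criterion, assuming each
# spatio-temporal point's visit counts across trajectories are known.

def point_entropy(visit_counts):
    """Shannon entropy (bits) of a point's visit distribution."""
    total = sum(visit_counts)
    probs = [c / total for c in visit_counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

def suppression_candidates(points, threshold):
    """Low-entropy points reveal more about individual trajectories,
    so they are candidates for local suppression."""
    return [p for p, counts in points.items()
            if point_entropy(counts) < threshold]
```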
Hybrid imperialist competitive algorithm for solving job-shop scheduling problem
YANG Xiaodong, KANG Yan, LIU Qing, SUN Jinwen
Journal of Computer Applications    2017, 37 (2): 517-522.   DOI: 10.11772/j.issn.1001-9081.2017.02.0517
For the Job-shop Scheduling Problem (JSP) with the objective of minimizing the makespan, a hybrid algorithm combining the Imperialist Competitive Algorithm (ICA) and Tabu Search (TS) was proposed. On the basis of ICA, the crossover and mutation operators of the Genetic Algorithm (GA) were applied as the assimilation step to strengthen global search ability. To overcome the weakness of ICA in local search, TS was used to improve the offspring produced by assimilation, with a hybrid neighborhood structure and a novel selection strategy making the search more efficient. The hybrid algorithm, which combines global and local search abilities, was tested on 13 classic benchmark scheduling problems and compared with four other recent hybrid algorithms; the experimental results show that it is effective and stable.
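The assimilation step borrows GA crossover on permutation-coded schedules. Order crossover (OX) is one common such operator; the sketch below is illustrative and not necessarily the exact operator used in the paper:

```python
# Order crossover (OX) for permutation-coded schedules: copy a slice from
# parent1, then fill the remaining positions with parent2's genes in their
# original relative order, preserving a valid permutation.

def order_crossover(parent1, parent2, cut1, cut2):
    size = len(parent1)
    child = [None] * size
    child[cut1:cut2] = parent1[cut1:cut2]
    fill = [g for g in parent2 if g not in child]
    idx = 0
    for i in range(size):
        if child[i] is None:
            child[i] = fill[idx]
            idx += 1
    return child
```

Because the child is always a valid permutation, the operator can be applied directly to operation-sequence encodings of JSP solutions before TS refines them.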
Clothing retrieval based on landmarks
CHEN Aiai, LI Lai, LIU Guangcan, LIU Qingshan
Journal of Computer Applications    2017, 37 (11): 3249-3255.   DOI: 10.11772/j.issn.1001-9081.2017.11.3249
At present, retrieval of clothing of the same or similar style is mainly text-based or content-based. Text-based algorithms often require massive labeled samples and suffer from missing labels and annotation inconsistency caused by human subjectivity. Content-based algorithms usually extract image features such as color, shape and texture and then measure similarity, but they struggle with background color interference and with clothing deformation due to different viewing angles, postures, etc. Aiming at these problems, clothing retrieval based on landmarks was proposed. The proposed method uses a cascaded deep convolutional neural network to locate the key points, and combines the low-level visual information of the key-point regions with the high-level semantic information of the whole image. Compared with traditional methods, it can effectively handle clothing deformation and complex background interference caused by viewing angle and posture; meanwhile, it does not need huge numbers of labeled samples, and is robust to background and deformation. Experiments on two large-scale datasets, Fashion Landmark and BDAT-Clothes, show that the proposed algorithm effectively improves precision and recall.
Parallel sparse subspace clustering via coordinate descent minimization
WU Jieqi, LI Xiaoyu, YUAN Xiaotong, LIU Qingshan
Journal of Computer Applications    2016, 36 (2): 372-376.   DOI: 10.11772/j.issn.1001-9081.2016.02.0372
Since the rapidly increasing data scale imposes a great computational challenge on Sparse Subspace Clustering (SSC), and existing optimization algorithms for SSC, e.g. ADMM (Alternating Direction Method of Multipliers), are implemented sequentially and cannot exploit multi-core processors to improve computational efficiency, a parallel SSC based on coordinate descent was proposed, inspired by the observation that SSC can be formulated as a sequence of sample-based sparse self-expression sub-problems. The proposed algorithm solves each sub-problem with a coordinate descent algorithm that has few parameters and converges fast. Since the self-expression sub-problems are independent, they are solved simultaneously on different processor cores, which brings low computing resource consumption and fast running speed and makes the proposed algorithm suitable for large-scale clustering. Experiments on simulated data and the Hopkins-155 motion segmentation dataset demonstrate that, compared with ADMM, the proposed parallel SSC method on multi-core processors significantly improves computational efficiency while preserving accuracy.
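The sample-wise decomposition can be sketched as follows, with a plain coordinate-descent lasso solver and thread-based dispatch standing in for the paper's optimized multi-core implementation; the data and regularization weight are illustrative:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Each column's sparse self-expression is an independent lasso problem
# solved by coordinate descent, so columns can be dispatched to workers.

def lasso_cd(D, x, lam=0.1, n_iter=50):
    """Coordinate descent for min_c 0.5*||x - D c||^2 + lam*||c||_1."""
    n_atoms = D.shape[1]
    c = np.zeros(n_atoms)
    col_norms = (D ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(n_atoms):
            if col_norms[j] == 0:
                continue
            r = x - D @ c + D[:, j] * c[j]      # residual excluding atom j
            rho = D[:, j] @ r
            c[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_norms[j]
    return c

def parallel_self_expression(X, lam=0.1, workers=4):
    """Solve one self-expression sub-problem per sample, concurrently."""
    n = X.shape[1]
    def solve(i):
        D = np.delete(X, i, axis=1)             # exclude sample i itself
        return lasso_cd(D, X[:, i], lam)
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(solve, range(n)))

# Tiny example: column 0 is best expressed by column 1 (same subspace)
X = np.array([[1.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])
codes = parallel_self_expression(X, lam=0.01)
```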
Distributed deduplication storage system based on Hadoop platform
LIU Qing, FU Yinjin, NI Guiqiang, MEI Jianmin
Journal of Computer Applications    2016, 36 (2): 330-335.   DOI: 10.11772/j.issn.1001-9081.2016.02.0330
Focusing on the large amount of data redundancy in data centers, especially the tremendous waste of storage space caused by backup data, a deduplication prototype based on the Hadoop platform was proposed. Deduplication technology, which detects and eliminates redundant data in a particular data set, can greatly reduce the required storage capacity and optimize the utilization of storage space. Using two big data management tools, the Hadoop Distributed File System (HDFS) and the non-relational database HBase, a scalable and distributed deduplication storage system was designed and implemented: the MapReduce parallel programming framework performed parallel deduplication, HDFS stored the data after deduplication, and the index table was stored in HBase for efficient chunk fingerprint indexing. The system was tested with virtual machine image file sets. The results demonstrate that the Hadoop-based distributed deduplication system ensures high throughput and excellent scalability while guaranteeing a high deduplication rate.
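The core deduplication logic can be shown in miniature, with a dictionary standing in for the HBase fingerprint index and a list standing in for HDFS chunk storage; the fixed chunk size and SHA-1 fingerprinting here are illustrative choices:

```python
import hashlib

# Toy in-memory analogue of chunk-fingerprint deduplication.

class DedupStore:
    def __init__(self, chunk_size=4):
        self.chunk_size = chunk_size
        self.index = {}        # fingerprint -> chunk id (HBase stand-in)
        self.chunks = []       # unique chunk data (HDFS stand-in)

    def put(self, data):
        """Store data as a recipe of chunk ids, skipping duplicate chunks."""
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            fp = hashlib.sha1(chunk).hexdigest()
            if fp not in self.index:
                self.index[fp] = len(self.chunks)
                self.chunks.append(chunk)
            recipe.append(self.index[fp])
        return recipe

    def get(self, recipe):
        """Reassemble the original data from its chunk-id recipe."""
        return b"".join(self.chunks[cid] for cid in recipe)

store = DedupStore()
recipe = store.put(b"abcdabcdxyz1")    # the "abcd" chunk appears twice
```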
Application of symbiotic system-based artificial fish school algorithm in feed formulation optimization
LIU Qing, LI Ying, QING Maiyu, ODAKA Tomohiro
Journal of Computer Applications    2016, 36 (12): 3303-3310.   DOI: 10.11772/j.issn.1001-9081.2016.12.3303
In consideration of the extensive applicability of intelligence algorithms to various types of feed formulation optimization models, the Artificial Fish Swarm Algorithm (AFSA) was applied to feed formulation optimization for the first time. To meet the required precision of feed formulation optimization, a symbiotic system-based AFSA was employed, which significantly improves convergence accuracy and speed compared with the original AFSA. In the optimization process, the positions of Artificial Fish (AF) individuals in the solution space were directly coded as solution vectors via the feed ratios, and a penalty-based objective function was employed to evaluate the fitness of AF individuals. AF individuals performed several behavior operators to explore the solution space according to a predefined behavioral strategy. The validity of the proposed algorithm was verified on three practical instances. The verification results show that the proposed algorithm works out optimal feed formulations that not only remarkably reduce the fodder cost but also satisfy the various nutrition constraints, and that its optimization performance is superior to the other existing algorithms.
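A penalty-based objective of the kind described can be sketched as below, assuming a linear cost and linear nutrient contents; the penalty weight and constraint forms are illustrative, not the paper's exact formulation:

```python
# Sketch of a penalty-based fitness for feed formulation: mix cost plus
# penalties for a ratio sum away from 1 and for unmet nutrient minimums.

def feed_objective(ratios, costs, nutrients, requirements, penalty=1e3):
    """ratios: ingredient proportions (should sum to 1)
    nutrients[k][i]: content of nutrient k in ingredient i
    requirements[k]: minimum required amount of nutrient k
    """
    cost = sum(r * c for r, c in zip(ratios, costs))
    pen = penalty * abs(sum(ratios) - 1.0)           # ratios must sum to 1
    for k, req in enumerate(requirements):
        supplied = sum(r * nutrients[k][i] for i, r in enumerate(ratios))
        pen += penalty * max(0.0, req - supplied)    # unmet minimum
    return cost + pen
```

Each AF individual's position is a `ratios` vector, and the swarm's behavior operators seek positions that minimize this value.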
Matrix-structural fast learning of cascaded classifier for negative sample inheritance
LIU Yang, YAN Shengye, LIU Qingshan
Journal of Computer Applications    2015, 35 (9): 2596-2601.   DOI: 10.11772/j.issn.1001-9081.2015.09.2596
To address the disadvantages of the negative-sample bootstrap process in matrix-structural learning of cascaded classifiers, such as the inefficiency of obtaining high-quality samples and the bad impact of bootstrap on overall learning efficiency and final classifier performance, a fast learning algorithm of cascaded classifiers with negative sample inheritance was proposed. Its negative-sample bootstrap process combines sample inheritance with graded bootstrap: helpful samples are first inherited from the negative sample set used in the previous training stage, and the insufficient part of the sample set is then collected from the negative image set. Sample inheritance narrows the bootstrap range for useful samples and thus accelerates bootstrap, while sample pre-screening during bootstrap increases sample complexity and promotes the final classifier performance. The experimental results show that, compared with the matrix-structural learning algorithm, the proposed algorithm saves 20 hours of training time and improves detection performance by 1 percentage point; compared with 17 other human detection algorithms, it also achieves good performance.
Fast super-resolution reconstruction for single image based on predictive sparse coding
SHEN Hui, YUAN Xiaotong, LIU Qingshan
Journal of Computer Applications    2015, 35 (6): 1749-1752.   DOI: 10.11772/j.issn.1001-9081.2015.06.1749
The classic super-resolution algorithm via sparse coding has a high computational cost during the reconstruction phase. In view of this disadvantage, a predictive sparse coding-based single image super-resolution method was proposed. In the training phase, the proposed method adds a code prediction error term to the traditional sparse coding error function and minimizes the resultant objective function with an alternating minimization procedure. In the testing phase, the reconstruction coefficients can be estimated by simply multiplying the low-dimensional image patch by the low-dimensional dictionary, without solving any sparse regression problem. The experimental results demonstrate that, compared with the classic single image super-resolution algorithm via sparse coding, the proposed method significantly reduces the reconstruction time while maintaining the super-resolution visual effect.
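The testing-phase shortcut reduces to a single matrix multiply. The sketch below uses randomly initialized placeholders for the learned code predictor and the high-resolution dictionary; the shapes and names are purely illustrative:

```python
import numpy as np

# Testing phase of predictive sparse coding: with a learned linear
# predictor W (trained to approximate the sparse codes), reconstruction
# coefficients come from one matmul instead of a sparse regression solve.
# W and Dh below are random placeholders, not trained models.

rng = np.random.default_rng(0)
patch_lr = rng.standard_normal(25)        # flattened low-res patch (5x5)
W = rng.standard_normal((64, 25))         # hypothetical code predictor
Dh = rng.standard_normal((81, 64))        # hypothetical high-res dictionary

alpha = W @ patch_lr                      # predicted coefficients: one matmul
patch_hr = Dh @ alpha                     # high-res patch reconstruction
```

This is why the method avoids the per-patch sparse regression that dominates the classic algorithm's reconstruction time.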
Collision attack on Zodiac algorithm
LIU Qing, WEI Hongru, PAN Wei
Journal of Computer Applications    2014, 34 (1): 73-77.   DOI: 10.11772/j.issn.1001-9081.2014.01.0073
In order to research the resistance of the Zodiac algorithm to collision attack, 8-round and 9-round distinguishers of Zodiac were constructed based on an equivalent structure of the algorithm. Firstly, collision attacks on 12 to 16 rounds were mounted by adding proper rounds before or after the 9-round distinguishers, with data complexities of 2^15, 2^31.2, 2^31.5, 2^31.7 and 2^63.9, and time complexities of 2^33.8, 2^49.9, 2^75.1, 2^108 and 2^140.1, respectively. Then the 8-round distinguishers were applied to the full-round algorithm, with data complexity 2^60.6 and time complexity 2^173.9. These results show that neither full-round Zodiac-192 nor full-round Zodiac-256 is immune to collision attack.
Task allocation based on ant colony optimization in cloud computing
ZHANG Chun-yan, LIU Qing-lin, MENG Ke
Journal of Computer Applications    2012, 32 (05): 1418-1420.  
Concerning the defects of Ant Colony Optimization (ACO) for task allocation, a grouping and polymorphic ACO was proposed to improve service quality. The algorithm divides the ants into three groups: searching ants, scouting ants and working ants. By updating the forecast completion time, it gradually minimizes the average completion time and reduces the chance of falling into a local optimum. The algorithm was finally simulated and implemented with the CloudSim toolkit. Results of the experiment show that the approach reduces the time of handling requests and tasks and improves task-handling efficiency.
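For flavor, the pheromone-biased choice a working ant might make when assigning a task to a node can be sketched with textbook ACO roulette-wheel selection; the group roles and the forecast-completion-time update are simplified away, and the weighting is the standard tau^alpha * eta^beta form, not necessarily the paper's:

```python
import random

# Roulette-wheel node selection with weight tau^alpha * eta^beta, where
# tau is pheromone and eta a heuristic (e.g. inverse expected finish time).

def choose_node(pheromone, heuristic, alpha=1.0, beta=2.0, rnd=random.random):
    weights = [(t ** alpha) * (h ** beta)
               for t, h in zip(pheromone, heuristic)]
    total = sum(weights)
    r = rnd() * total
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if acc >= r:
            return i
    return len(weights) - 1
```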
New scheme for image transmission based on SPIHT
FU Yao, LIU Qing-li
Journal of Computer Applications    2012, 32 (04): 1144-1146.   DOI: 10.3724/SP.J.1087.2012.01144
A new real-time image transmission scheme based on Set Partitioning In Hierarchical Trees (SPIHT) was proposed. Firstly, the image data were transformed by wavelet. Secondly, to resist error propagation during transmission, the wavelet coefficients were separated into small blocks and encoded by SPIHT. Finally, to improve the quality of the reconstructed image, the wavelet coefficients of the highest level in every block were transmitted repeatedly. An optimum frame length was also proposed to improve the throughput of the image transmission system. Both theoretical analysis and simulation results validate that the proposed scheme provides stronger error resilience than the traditional SPIHT-based scheme, and can improve the peak signal-to-noise ratio of the reconstructed image by about 10 dB.
Optimization of sparse data sets to improve quality of collaborative filtering systems
LIU Qing-peng, CHEN Ming-rui
Journal of Computer Applications    2012, 32 (04): 1082-1085.   DOI: 10.3724/SP.J.1087.2012.01082
Collaborative filtering is currently one of the most successful personalized recommendation technologies applied in personalized recommendation systems. However, as the numbers of users and items increase dramatically, the score matrix that reflects users' preference information becomes very sparse, which seriously degrades the recommendation quality of collaborative filtering. To solve this problem, a comprehensive mean optimal filling method was presented. Compared with the default-value method and the mode method, this method has two advantages: first, it takes account of differences in user rating scales; second, it avoids the "multiple modes" and "no mode" problems. On the same data set, traditional user-based collaborative filtering was used to test the effectiveness of the method, and the results prove that the new method can improve the recommendation quality of recommendation systems.
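A mean-based filling of the sparse score matrix can be sketched as below, where 0 marks an unrated cell. This illustrative "comprehensive mean" blends the user's own average with the item's average; the paper's exact weighting may differ:

```python
# Fill unrated cells of a rating matrix with a blend of user and item means.

def avg(xs):
    return sum(xs) / len(xs) if xs else 0.0

def fill_matrix(ratings):
    """Return a copy of the matrix with unrated (0) cells filled in."""
    n_users, n_items = len(ratings), len(ratings[0])
    user_avg = [avg([r for r in row if r > 0]) for row in ratings]
    item_avg = [avg([ratings[u][i] for u in range(n_users)
                     if ratings[u][i] > 0]) for i in range(n_items)]
    filled = [row[:] for row in ratings]
    for u in range(n_users):
        for i in range(n_items):
            if filled[u][i] == 0:
                filled[u][i] = (user_avg[u] + item_avg[i]) / 2
    return filled
```

Including the user's own average is what accounts for individual rating-scale differences, the first advantage claimed over default and mode filling.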
Attack detection method based on statistical process control in collaborative recommender system
LIU Qing-lin, MENG Ke, LI Su-feng
Journal of Computer Applications    2012, 32 (03): 707-709.   DOI: 10.3724/SP.J.1087.2012.00707
Because of the open nature of collaborative recommender systems and their reliance on user-specified judgments for building profiles, an attacker could affect the prediction by injecting a lot of biased data. In order to keep the authenticity of recommendations, the attack detection method based on Statistical Process Control (SPC) was proposed. The method constructed the Shewhart control chart by using the users' deviation from the average of rating numbers and detected attackers according to the warning rules of the chart, thus improving the robustness of collaborative recommender systems. The experiments demonstrate that the method is effective with high precision and high recall against a variety of attack models.
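A Shewhart-style check on per-user rating counts can be sketched as follows; the run-based warning rules of a full control chart are omitted, and the three-sigma limits are the textbook default rather than necessarily the paper's exact settings:

```python
import statistics

# Flag users whose number of ratings deviates from the population mean
# by more than k standard deviations (Shewhart control limits).

def shewhart_outliers(rating_counts, k=3.0):
    mean = statistics.mean(rating_counts)
    sd = statistics.pstdev(rating_counts)
    upper, lower = mean + k * sd, mean - k * sd
    return [i for i, c in enumerate(rating_counts)
            if c > upper or c < lower]
```

Attack profiles injected in bulk tend to rate far more (or more uniformly) than genuine users, which is what pushes them outside the control limits.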
Parallel matrix multiplication based on MPI+CUDA asynchronous model
LIU Qing-kun, MA Ming-wei, YAN Wei-chun
Journal of Computer Applications    2011, 31 (12): 3327-3330.  
Matrix multiplication plays an important role in scientific computing, and different structural models can improve the performance of parallel matrix multiplication. In the existing MPI+CUDA synchronous model, the host side must enter a waiting state and cannot continue working until the device completes its task, which obviously wastes time. Concerning this problem, a parallel matrix multiplication based on an MPI+CUDA asynchronous model was proposed. This model keeps host sides from entering the waiting state and uses CUDA streams to handle data volumes exceeding GPU memory. Analysis of the speedup and efficiency of the asynchronous model and the experimental results show that MPI+CUDA parallel programming obviously improves parallel efficiency and the speed of large-scale matrix multiplication, exploiting the advantages of distributed memory between nodes and shared memory within a node; it is an effective and feasible parallel strategy.
Distributed control scheme for power transformation system based on network agent of VxWorks
LIU Qing-shan, JIANG Xiao-hua
Journal of Computer Applications    2005, 25 (02): 433-436.   DOI: 10.3724/SP.J.1087.2005.0433
An automatically-controlled power transformation system was studied and a distributed control scheme for the system was proposed and implemented in order to improve the safety and efficiency of power systems. Advanced software and hardware techniques, system architectures and safety requirements of power transformation systems were all taken into account. Real-time event and disturbance logs and control commands were communicated over a local power transformation system based on the embedded operating system of VxWorks and the hardware platform of PowerPC860 CPU. The distributed control scheme for power transformation systems realized supervisory control, data acquisition and logic functions, and has been validated by industrial experiments.